Society and Culture


The Curious Case of Curiosity across Human Cultures and LLMs

Borah, Angana, Jin, Zhijing, Mihalcea, Rada

arXiv.org Artificial Intelligence

Recent advances in Large Language Models (LLMs) have expanded their role in human interaction, yet curiosity -- a central driver of inquiry -- remains underexplored in these systems, particularly across cultural contexts. In this work, we investigate cultural variation in curiosity using Yahoo! Answers, a real-world multi-country dataset spanning diverse topics. We introduce CUEST (CUriosity Evaluation across SocieTies), an evaluation framework that measures human-model alignment in curiosity through linguistic (style) and topic-preference (content) analysis, grounding insights in social science constructs. Across open- and closed-source models, we find that LLMs flatten cross-cultural diversity, aligning more closely with how curiosity is expressed in Western countries. We then explore fine-tuning strategies to induce curiosity in LLMs, narrowing the human-model alignment gap by up to 50%. Finally, we demonstrate the practical value of curiosity for LLM adaptability across cultures, showing its importance for future NLP research.


Prompt Design Matters for Computational Social Science Tasks but in Unpredictable Ways

Atreja, Shubham, Ashkinaze, Joshua, Li, Lingyao, Mendelsohn, Julia, Hemphill, Libby

arXiv.org Artificial Intelligence

Manually annotating data for computational social science tasks can be costly, time-consuming, and emotionally draining. While recent work suggests that LLMs can perform such annotation tasks in zero-shot settings, little is known about how prompt design impacts LLMs' compliance and accuracy. We conduct a large-scale multi-prompt experiment to test how model selection (ChatGPT, PaLM2, and Falcon7b) and prompt design features (definition inclusion, output type, explanation, and prompt length) impact the compliance and accuracy of LLM-generated annotations on four CSS tasks (toxicity, sentiment, rumor stance, and news frames). Our results show that LLM compliance and accuracy are highly prompt-dependent. For instance, prompting for numerical scores instead of labels reduces all LLMs' compliance and accuracy. The overall best prompting setup is task-dependent, and minor prompt changes can cause large changes in the distribution of generated labels. By showing that prompt design significantly impacts the quality and distribution of LLM-generated annotations, this work serves as both a warning and practical guide for researchers and practitioners.


AI in society and culture: decision making and values

Feher, Katalin, Zelenkauskaite, Asta

arXiv.org Artificial Intelligence

With the increased expectations surrounding artificial intelligence, academic research faces complex questions of human-centred, responsible and trustworthy technology embedded in society and culture. Several academic debates, social consultations and impact studies are available that reveal the key aspects of the changing human-machine ecosystem. To contribute to these studies, hundreds of related academic sources are summarized below with respect to AI-driven decisions and valuable AI. In detail, this literature review focuses on sociocultural filters, a taxonomy of human-machine decisions, and perspectives on value-based AI. For a better understanding, it is proposed to invite stakeholders to a prepared large-scale survey about next-generation AI that investigates issues going beyond the technology itself.


Is Artificial Intelligence Too Dehumanizing to Succeed?

#artificialintelligence

Does all the hype about AI sound just a little too familiar? If you're old enough to remember the early days of the Internet and the dotcom bubble, you might also remember the tsunami of hype that attended those events as they unfolded. Wired magazine made endlessly breathless predictions about how the Internet would transform humanity and bring about a technologically driven utopia. Now, despite those glowing promises, we're wrestling with how such a promising technology devolved into a netherworld of hacking, hate speech, exploitation of personal data, "dark webs", misinformation, political chicanery, and citizen surveillance. In the latest twist, AI is being sold in a similar way by similar players, and the cultural amnesia is impressive.